36 research outputs found

    Projected gradient descent for non-convex sparse spike estimation

    We propose a new algorithm for sparse spike estimation from Fourier measurements. Building on theoretical results on non-convex optimization techniques for off-the-grid sparse spike estimation, we present a projected gradient descent algorithm coupled with a spectral initialization procedure. This algorithm makes it possible to estimate the positions of large numbers of Diracs in 2D from random Fourier measurements. Along with the algorithm, we present theoretical qualitative insights explaining its success. This opens a new direction for practical off-the-grid spike estimation with theoretical guarantees in imaging applications.
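    To make the procedure concrete, here is a minimal Python sketch of projected gradient descent on spike positions and amplitudes for a 1D analogue of the problem, with a random rather than spectral initialization; the sizes, step sizes, frequency range and function names are illustrative assumptions, not the paper's setting.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 1D analogue: k Diracs on the torus [0, 1),
        # observed through m random Fourier measurements.
        k, m = 5, 60
        true_t = np.sort(rng.uniform(0, 1, k))      # spike positions
        true_a = rng.uniform(0.5, 1.5, k)           # spike amplitudes
        omega = rng.integers(-30, 31, m)            # random integer frequencies

        def forward(a, t):
            """A(a, t)_j = sum_k a_k exp(-2i*pi*omega_j*t_k)."""
            return np.exp(-2j * np.pi * np.outer(omega, t)) @ a

        y = forward(true_a, true_t)                 # noiseless measurements

        # Crude random initialization (the paper uses a spectral procedure instead).
        a = np.ones(k)
        t = rng.uniform(0, 1, k)

        step_a, step_t = 1e-3, 1e-5
        for _ in range(5000):
            E = np.exp(-2j * np.pi * np.outer(omega, t))   # m x k matrix of atoms
            r = E @ a - y                                  # residual
            # Gradients of 0.5 * ||r||^2 with respect to the real parameters (a, t).
            grad_a = np.real(E.conj().T @ r)
            grad_t = np.real((-2j * np.pi * omega[:, None] * E * a).conj().T @ r)
            a -= step_a * grad_a
            t -= step_t * grad_t
            # "Projection" step: wrap positions back onto the torus, a stand-in
            # for the projection onto the constraint set used in the paper.
            t = np.mod(t, 1.0)

        print("residual norm:", np.linalg.norm(forward(a, t) - y))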

    Relations entre le modèle d’image et le nombre de mesures pour une super-résolution fidèle [Relations between the image model and the number of measurements for faithful super-resolution]

    Multi-image super-resolution produces a high-resolution image from several low-resolution acquisitions of a scene. This inverse problem can be ill-posed, in which case a regularity model of the scene is needed to produce a realistic image. However, modeling errors limit the performance of such methods. Consequently, finding conditions under which the problem is well-posed is necessary in order to limit the amount of regularization when possible and to maximize the fidelity of the result. We therefore ask the following questions: for noise with finite energy or for contamination by outliers (with finite support), how many images allow the reconstruction of a high-resolution image close to the real scene? How can the fidelity of regularized methods be maximized when the number of images is insufficient? For measurement noise, an asymptotic study of the conditioning of the inverse system shows that unregularized super-resolution is possible when the number of images is sufficient. In cases close to the critical invertibility threshold, which are the most ill-posed, we propose and validate a local estimator of the conditioning, which we use to restrict the recourse to regularization as much as possible. For outliers, we use the equivalence between the sparse recovery problem and robustness to outliers to derive bounds on the robustness of super-resolution. We also treat the regularized case and give conditions under which regularization improves the robustness of the problem. All these results are validated by experiments.
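    As a toy companion to the conditioning question (not the estimator proposed in the thesis), the Python sketch below stacks the downsampling of several randomly shifted copies of a 1D signal and computes the condition number of the resulting linear system; the shift model, sizes and absence of blur are simplifying assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        n, d = 64, 4          # high-resolution size and downsampling factor (assumed)
        num_images = 6        # number of low-resolution acquisitions

        def shift_matrix(n, s):
            """Circular sub-pixel shift of a length-n signal, by linear interpolation."""
            M = np.zeros((n, n))
            for i in range(n):
                j0 = int(np.floor(i + s)) % n
                frac = (i + s) - np.floor(i + s)
                M[i, j0] += 1 - frac
                M[i, (j0 + 1) % n] += frac
            return M

        # Downsampling operator: keep every d-th sample.
        D = np.zeros((n // d, n))
        D[np.arange(n // d), np.arange(0, n, d)] = 1.0

        shifts = rng.uniform(0, d, num_images)      # random sub-pixel shifts
        A = np.vstack([D @ shift_matrix(n, s) for s in shifts])

        # Condition number of the stacked super-resolution operator: when it stays
        # moderate, an unregularized least-squares reconstruction is meaningful.
        sv = np.linalg.svd(A, compute_uv=False)
        print("system size:", A.shape, "condition number:", sv[0] / sv[-1])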

    Phase Unmixing : Multichannel Source Separation with Magnitude Constraints

    We consider the problem of estimating the phases of K mixed complex signals from a multichannel observation, when the mixing matrix and signal magnitudes are known. This problem can be cast as a non-convex quadratically constrained quadratic program, which is known to be NP-hard in general. We propose three approaches to tackle it: a heuristic method, an alternating minimization method, and a convex relaxation into a semi-definite program. The last two approaches are shown to outperform the oracle multichannel Wiener filter on under-determined informed source separation tasks, using simulated and speech signals. The convex relaxation approach yields the best results, including the potential for exact source separation in under-determined settings.
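    A minimal sketch of the alternating-minimization idea, for a single observation vector, is given below; the problem sizes, random initialization and fixed iteration count are placeholder choices rather than the paper's exact algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        I, K = 3, 5                                  # channels, sources (under-determined)
        A = rng.normal(size=(I, K)) + 1j * rng.normal(size=(I, K))  # known mixing matrix
        r = rng.uniform(0.5, 1.5, K)                 # known source magnitudes
        true_phi = rng.uniform(0, 2 * np.pi, K)
        x = A @ (r * np.exp(1j * true_phi))          # multichannel observation

        phi = rng.uniform(0, 2 * np.pi, K)           # random phase initialization
        for _ in range(200):
            for k in range(K):
                s = r * np.exp(1j * phi)
                residual = x - A @ s + A[:, k] * s[k]   # observation minus all other sources
                # Phase of source k that best fits the residual, the others being fixed.
                phi[k] = np.angle(A[:, k].conj() @ residual)

        s = r * np.exp(1j * phi)
        print("data misfit:", np.linalg.norm(x - A @ s))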

    PROJECTED GRADIENT DESCENT FOR NON-CONVEX SPARSE SPIKE ESTIMATION

    We propose an algorithm for sparse spike estimation from Fourier measurements. Based on theoretical results on non-convex optimization techniques for off-the-grid sparse spike estimation, we present a simple projected descent algorithm coupled with an initialization procedure. This algorithm makes it possible to estimate the positions of large numbers of Diracs in 2D from random Fourier measurements, which opens the way for the practical estimation of such signals in imaging applications, as the algorithm scales well with the dimensions of the problem. We present, along with the algorithm, theoretical qualitative insights explaining its success.

    The basins of attraction of the global minimizers of non-convex inverse problems with low-dimensional models in infinite dimension

    Non-convex methods for linear inverse problems with low-dimensional models have emerged as an alternative to convex techniques. We propose a theoretical framework in which both finite-dimensional and infinite-dimensional linear inverse problems can be studied. We show how the size of the basins of attraction of the minimizers of such problems is linked to the number of available measurements. This framework recovers known results about low-rank matrix estimation and off-the-grid sparse spike estimation, and it provides new results for Gaussian mixture estimation from linear measurements.
    Keywords: low-dimensional models, non-convex methods, low-rank matrix recovery, off-the-grid sparse recovery, Gaussian mixture model estimation from linear measurements.
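    As a small illustration of one of the settings mentioned above, the Python sketch below runs plain gradient descent for low-rank matrix recovery from random Gaussian measurements, starting from a point close to the planted solution so that the iterates stay in a basin of attraction; the dimensions, step size and perturbation-based initialization are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        n, rank, m = 20, 2, 200                     # matrix size, rank, number of measurements
        U_true = rng.normal(size=(n, rank))
        X_true = U_true @ U_true.T

        # Random Gaussian linear measurements y_i = <A_i, X>.
        A = rng.normal(size=(m, n, n))
        y = np.einsum('mij,ij->m', A, X_true)

        def grad(U):
            """Gradient of 0.5 * sum_i (<A_i, U U^T> - y_i)^2 with respect to U."""
            res = np.einsum('mij,ij->m', A, U @ U.T) - y
            G = np.einsum('m,mij->ij', res, A)
            return (G + G.T) @ U

        # Start inside a (hoped-for) basin of attraction: a small perturbation of the
        # true factor stands in for a proper initialization procedure.
        U = U_true + 0.1 * rng.normal(size=(n, rank))
        for _ in range(500):
            U -= 1e-3 * grad(U) / m

        print("relative error:", np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true))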

    Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations

    The 1-norm has been proven to be a good convex regularizer for the recovery of sparse vectors from under-determined linear measurements. It has been shown that, with an appropriate measurement operator, a number of measurements of the order of the sparsity of the signal (up to log factors) is sufficient for stable and robust recovery. More recently, such recovery results have been generalized to more general low-dimensional model sets and (convex) regularizers. These results lead to the following question: to recover a given low-dimensional model set from linear measurements, what is the "best" convex regularizer? To approach this problem, we propose a general framework to define several notions of "best regularizer" with respect to a low-dimensional model. We show, in the minimal case of sparse recovery in dimension 3, that the 1-norm is optimal for these notions. However, generalizing such results to the n-dimensional case seems out of reach. To tackle this problem, we propose looser notions of best regularizer and show that the 1-norm is optimal among weighted 1-norms for sparse recovery within this framework.
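    A small numerical companion (not taken from the paper): weighted 1-norm minimization under linear constraints can be written as a linear program, which makes it easy to compare the plain 1-norm with a weighted variant on a sparse instance in dimension 3; the measurement matrix, the weights and the helper name below are illustrative.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(4)

        # Recover a 1-sparse vector in dimension 3 from 2 linear measurements.
        n = 3
        A = rng.normal(size=(2, n))
        x_true = np.array([0.0, 1.5, 0.0])
        y = A @ x_true

        def weighted_l1_recovery(A, y, w):
            """min sum_i w_i |x_i|  s.t.  A x = y, via the split x = u - v with u, v >= 0."""
            _, n = A.shape
            c = np.concatenate([w, w])              # objective on (u, v)
            A_eq = np.hstack([A, -A])               # encodes A (u - v) = y
            res = linprog(c, A_eq=A_eq, b_eq=y,
                          bounds=[(0, None)] * (2 * n), method="highs")
            u, v = res.x[:n], res.x[n:]
            return u - v

        x_l1 = weighted_l1_recovery(A, y, np.ones(n))                  # plain 1-norm
        x_w = weighted_l1_recovery(A, y, np.array([1.0, 2.0, 1.0]))    # a weighted 1-norm
        print("1-norm recovery error:         ", np.linalg.norm(x_l1 - x_true))
        print("weighted 1-norm recovery error:", np.linalg.norm(x_w - x_true))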

    Stable recovery of low-dimensional cones in Hilbert spaces: One RIP to rule them all

    Many inverse problems in signal processing deal with the robust estimation of unknown data from under-determined linear observations. Low-dimensional models, when combined with appropriate regularizers, have been shown to be efficient at performing this task. Sparse models with the 1-norm or low-rank models with the nuclear norm are examples of such successful combinations. Stable recovery guarantees in these settings have been established using a common tool adapted to each case: the notion of restricted isometry property (RIP). In this paper, we establish generic RIP-based guarantees for the stable recovery of cones (positively homogeneous model sets) with arbitrary regularizers. These guarantees are illustrated on selected examples. For block-structured sparsity in the infinite-dimensional setting, we use the guarantees for a family of regularizers whose efficiency in terms of RIP constant can be controlled, leading to stronger and sharper guarantees than the state of the art.
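    For reference, the restricted isometry property invoked here is usually stated on the secant set of the model; a standard formulation (recalled here in LaTeX, not quoted from the paper) reads:

        \[
          (1 - \delta)\,\|x\|_{\mathcal{H}}^{2}
          \;\le\; \|A x\|^{2}
          \;\le\; (1 + \delta)\,\|x\|_{\mathcal{H}}^{2}
          \qquad \text{for all } x \in \Sigma - \Sigma ,
        \]

    where \(\Sigma \subset \mathcal{H}\) is the low-dimensional cone, \(A\) the measurement operator, and \(\delta \in (0, 1)\) the RIP constant.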
